
Introduction to Time Series Analysis - 05

This note is for the course MATH 545 at McGill University.

Lectures 13 - 15


Durbin-Levinson Algorithm

The Durbin-Levinson algorithm computes the coefficients of the best linear predictor recursively.

Assume $\{X_t\}$ is stationary with $E(X_t)=0$.

Algorithm

Compute the coefficients $\phi_{n1}, \cdots, \phi_{nn}$ at step $n$ of the algorithm.

The goal is to predict $X_{n+1}$.

At step $n$, the best linear predictor is of the form $P_nX_{n+1} = \sum^n_{j=1}\phi_{nj}X_{n+1-j}$.

$$\phi_{nn} = \nu_{n-1}^{-1}\Big[\gamma(n) - \sum^{n-1}_{j=1}\phi_{n-1, j}\gamma(n-j)\Big]$$

$$\begin{bmatrix} \phi_{n1} \\ \vdots \\ \phi_{n,n-1} \end{bmatrix}= \begin{bmatrix} \phi_{n-1,1} \\ \vdots \\ \phi_{n-1,n-1} \end{bmatrix} -\phi_{nn} \begin{bmatrix} \phi_{n-1, n-1} \\ \vdots \\ \phi_{n-1, 1} \end{bmatrix}$$

and $\nu_n = \nu_{n-1}[1-\phi^2_{nn}]$.

The recursion is initialized with $\phi_{11}=\frac{\gamma(1)}{\gamma(0)} = \rho(1)$ and $\nu_0 = \gamma(0)$.
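
The recursion translates directly into code. Below is a minimal sketch in Python/NumPy; the function name `durbin_levinson` and the convention of passing the autocovariances $\gamma(0), \cdots, \gamma(N)$ as a list are my own choices, not from the lecture.

```python
import numpy as np

def durbin_levinson(gamma):
    """Run the Durbin-Levinson recursion on autocovariances
    gamma = [gamma(0), gamma(1), ..., gamma(N)].

    Returns (phi, nu): phi[n] = [phi_{n1}, ..., phi_{nn}] and nu[n] = nu_n.
    """
    gamma = np.asarray(gamma, dtype=float)
    phi = [np.array([])]          # step 0: no observations, empty predictor
    nu = [gamma[0]]               # nu_0 = gamma(0)
    for n in range(1, len(gamma)):
        prev = phi[n - 1]
        # phi_{nn} = nu_{n-1}^{-1} [gamma(n) - sum_{j=1}^{n-1} phi_{n-1,j} gamma(n-j)]
        phi_nn = (gamma[n] - prev @ gamma[n - 1:0:-1]) / nu[n - 1]
        # [phi_{n1}, ..., phi_{n,n-1}] = phi_{n-1} - phi_{nn} * reverse(phi_{n-1})
        phi.append(np.append(prev - phi_nn * prev[::-1], phi_nn))
        nu.append(nu[n - 1] * (1 - phi_nn ** 2))          # nu_n
    return phi, nu

# Example: MA(1) with coefficient 0.5 and unit noise variance, so
# gamma(0) = 1.25, gamma(1) = 0.5, gamma(h) = 0 for h >= 2.
phi, nu = durbin_levinson([1.25, 0.5, 0.0, 0.0])
print(phi[3], nu[3])
```

For this MA(1) example the partial autocorrelations $\phi_{nn}$ alternate in sign and $\nu_n$ decreases toward the noise variance, the expected behavior for an MA(1).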

Now let $R_n = \frac{1}{\gamma(0)}\Gamma_n$, $\underset{\sim}{\rho_{1:n}} = (\rho(1), \cdots, \rho(n))$, and $\underset{\sim}{\rho_{n:1}} = \underset{\sim}{\rho_{1:n}^R} = (\rho(n), \cdots, \rho(1))$. (The superscript $R$ here means "reverse".)

If $\Gamma_n \underset{\sim}{\phi_n} = \underset{\sim}{\gamma_{1:n}}$, then dividing by $\gamma(0)$ gives $R_n \underset{\sim}{\phi_n} = \underset{\sim}{\rho_{1:n}}$.

(We prove this by induction.)

Base case: $R_1 \phi_{11} = \rho(1)$, because $R_1 = 1$ and $\phi_{11} = \rho(1)$.

Assume $R_k \underset{\sim}{\phi_k} = \underset{\sim}{\rho_{1:k}}$ holds; we need to show $R_{k+1} \underset{\sim}{\phi_{k+1}} = \underset{\sim}{\rho_{1:(k+1)}}$.

$$\begin{align} R_{k+1}\underset{\sim}{\phi_{k+1}} &= \begin{bmatrix} R_k & \underset{\sim}{\rho_{k:1}} \\ \underset{\sim}{\rho_{k:1}^T} & 1 \end{bmatrix} \begin{bmatrix} \underset{\sim}{\phi_{k}} - \phi_{k+1,k+1}\underset{\sim}{\phi_{k}^R} \\ \phi_{k+1, k+1} \end{bmatrix}\\ &= \begin{bmatrix} R_k(\underset{\sim}{\phi_{k}} - \phi_{k+1,k+1}\underset{\sim}{\phi_{k}^R}) + \underset{\sim}{\rho_{k:1}}\phi_{k+1,k+1} \\ \underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_{k}} - \underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_{k}^R}\phi_{k+1, k+1} + \phi_{k+1, k+1} \end{bmatrix} \\ &= \begin{bmatrix} R_k(\underset{\sim}{\phi_{k}} - \phi_{k+1,k+1}\underset{\sim}{\phi_{k}^R}) + \underset{\sim}{\rho_{k:1}}\phi_{k+1,k+1} \\ \underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_{k}} + \phi_{k+1, k+1} (1-\underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_{k}^R}) \end{bmatrix} \\ &= \begin{bmatrix} \underset{\sim}{\rho_{1:k}} + \phi_{k+1,k+1}(\underset{\sim}{\rho_{k:1}} - R_k\underset{\sim}{\phi_k^R}) \\ ? \end{bmatrix} \\ &= \begin{bmatrix} \underset{\sim}{\rho_{1:k}} \\ ? \end{bmatrix}\end{align}$$

The last equality uses $R_k\underset{\sim}{\phi_k^R} = \underset{\sim}{\rho_{k:1}}$, which follows from reversing both sides of the induction hypothesis $R_k\underset{\sim}{\phi_k} = \underset{\sim}{\rho_{1:k}}$: the system is unchanged under reversal because the $(i,j)$ entry of $R_k$ depends only on $|i-j|$.

To calculate $?$, we know that:

$$\begin{align} \nu_k &= E((X_{k+1} - P_kX_{k+1})^2) \\ &= \gamma(0) - \underset{\sim}{\phi_k^T}\underset{\sim}{\gamma_{1:k}} \\ &=\gamma(0)(1-\underset{\sim}{\phi_k^T}\underset{\sim}{\rho_{1:k}}) \end{align}$$

So that $\phi_{k+1,k+1} = \frac{\gamma(k+1)-\underset{\sim}{\gamma_{k:1}^T}\underset{\sim}{\phi_k}}{\nu_k} = \frac{\gamma(k+1)-\underset{\sim}{\gamma_{k:1}^T}\underset{\sim}{\phi_k}}{\gamma(0)(1-\underset{\sim}{\phi_k^T}\underset{\sim}{\rho_{1:k}})}$

Therefore, we can calculate $?$ by:

$$\begin{align} ? &= \underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_{k}} + \frac{(\gamma(k+1)-\underset{\sim}{\gamma_{k:1}^T}\underset{\sim}{\phi_k})(1-\underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_{k}^R})}{\gamma(0)(1-\underset{\sim}{\phi_k^T}\underset{\sim}{\rho_{1:k}})} \\ &= \underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_{k}} + \frac{\gamma(k+1)}{\gamma(0)} - \underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_{k}} \\ &= \rho(k+1) \end{align}$$

The second equality uses $\underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_k^R} = \underset{\sim}{\phi_k^T}\underset{\sim}{\rho_{1:k}}$ (the same sum read in reverse order), which cancels the parenthesized factor against the denominator, together with $\underset{\sim}{\gamma_{k:1}^T}\underset{\sim}{\phi_k}/\gamma(0) = \underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_k}$.

This completes the proof.
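
The induction step can also be checked numerically for a concrete model: starting from $\underset{\sim}{\phi_k}$ solving the order-$k$ equations and updating with $\phi_{k+1,k+1}$ as above, the updated vector should solve the order-$(k+1)$ equations. A minimal NumPy sketch; the MA(1) autocovariances (coefficient $0.5$, unit noise variance) and all helper names are illustrative assumptions:

```python
import numpy as np

# MA(1) autocovariances with coefficient 0.5 (illustrative assumption):
# gamma(0) = 1.25, gamma(1) = 0.5, gamma(h) = 0 for |h| >= 2.
gamma = lambda h: {0: 1.25, 1: 0.5}.get(abs(h), 0.0)
rho = lambda h: gamma(h) / gamma(0)
R = lambda m: np.array([[rho(i - j) for j in range(m)] for i in range(m)])
rho_1k = lambda m: np.array([rho(j) for j in range(1, m + 1)])

k = 3
phi_k = np.linalg.solve(R(k), rho_1k(k))             # induction hypothesis
nu_k = gamma(0) * (1 - phi_k @ rho_1k(k))            # nu_k from the formula above
gamma_k1 = np.array([gamma(j) for j in range(k, 0, -1)])   # gamma_{k:1}
phi_last = (gamma(k + 1) - gamma_k1 @ phi_k) / nu_k  # phi_{k+1,k+1}
phi_next = np.append(phi_k - phi_last * phi_k[::-1], phi_last)

# The updated vector solves R_{k+1} phi_{k+1} = rho_{1:(k+1)}, as claimed.
assert np.allclose(R(k + 1) @ phi_next, rho_1k(k + 1))
```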

Revisiting last lecture's formula for $\nu_n$

$$\begin{align} \nu_n &= E\Big(\big(X_{n+1} - \sum_{j=1}^n\phi_{nj}X_{n+1-j}\big)^2\Big) \\ &= \gamma(0) - \underset{\sim}{\phi_n^T}\underset{\sim}{\gamma_{1:n}} \\ &=\gamma(0) - \underset{\sim}{\phi_{n-1}^T}\underset{\sim}{\gamma_{1:n-1}} + \underbrace{\phi_{nn}\underset{\sim}{\phi_{n-1}^{RT}}\underset{\sim}{\gamma_{1:n-1}} - \phi_{nn}\gamma(n)}_{\text{by the recursive formula for } \underset{\sim}{\phi_n}} \\ &=\gamma(0) - \underset{\sim}{\phi_{n-1}^T}\underset{\sim}{\gamma_{1:n-1}} - \phi_{nn}\big(\gamma(n) - \underset{\sim}{\phi_{n-1}^{RT}}\underset{\sim}{\gamma_{1:n-1}}\big) \\ &=\nu_{n-1} - \phi_{nn}(\phi_{nn}\nu_{n-1}) \\ &=\nu_{n-1}(1-\phi_{nn}^2) \end{align}$$

Here $\underset{\sim}{\phi_{n-1}^{RT}}\underset{\sim}{\gamma_{1:n-1}} = \sum_{j=1}^{n-1}\phi_{n-1,j}\gamma(n-j)$, so the parenthesized term equals $\phi_{nn}\nu_{n-1}$ by the formula for $\phi_{nn}$.
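
This recursion can be verified numerically as well, comparing $\nu_{n-1}(1-\phi_{nn}^2)$ against the direct formula $\nu_n = \gamma(0) - \underset{\sim}{\phi_n^T}\underset{\sim}{\gamma_{1:n}}$. A short sketch reusing the illustrative MA(1) autocovariances (helper name `phi_nu` is mine):

```python
import numpy as np

# Illustrative MA(1) autocovariances, as in the check above.
gamma = lambda h: {0: 1.25, 1: 0.5}.get(abs(h), 0.0)

def phi_nu(n):
    # Solve Gamma_n phi_n = gamma_{1:n} directly, then compute nu_n.
    G = np.array([[gamma(i - j) for j in range(n)] for i in range(n)])
    g = np.array([gamma(j) for j in range(1, n + 1)])
    phi = np.linalg.solve(G, g)
    return phi, gamma(0) - phi @ g

for n in range(2, 6):
    phi_n, nu_n = phi_nu(n)
    _, nu_prev = phi_nu(n - 1)
    # nu_n = nu_{n-1} (1 - phi_nn^2), with phi_nn the last entry of phi_n
    assert np.isclose(nu_n, nu_prev * (1 - phi_n[-1] ** 2))
```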

Innovations Algorithm

Define the best linear predictor for $X_n$ as:

$$\hat{X}_n = \begin{cases} 0, & n=1 \\ P_{n-1}X_n, & n=2,3,\cdots \end{cases}$$

$V_n = E[(X_{n+1} - P_nX_{n+1})^2]$

Define the one-step innovations (prediction errors) in the following way:

$U_n = X_n - \hat{X}_n$

$\underset{\sim}{U_n} = (U_1, \cdots, U_n)^T$

$\underset{\sim}{X_n} = (X_1, \cdots, X_n)^T$

$\underset{\sim}{U_n} = A_n\underset{\sim}{X_n}$

$$A_n = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & \cdots \\ a_{11} & 1 & 0 & 0 & 0 & \cdots \\ a_{22} & a_{21} & 1 & 0 & 0 & \cdots \\ a_{33} & a_{32} & a_{31} & 1 & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$

Then we have

$U_1 = X_1$ (since $\hat{X}_1 = 0$)

$U_2 = X_2 + a_{11}X_1$

$U_3 = X_3 + a_{21}X_2 + a_{22}X_1$

Then $A_n$ is nonsingular (it is lower triangular with ones on the diagonal), with inverse $C_n = A_n^{-1}$, where

$$C_n = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & \cdots \\ \theta_{11} & 1 & 0 & 0 & 0 & \cdots \\ \theta_{22} & \theta_{21} & 1 & 0 & 0 & \cdots \\ \theta_{33} & \theta_{32} & \theta_{31} & 1 & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$

Up to now, we have $\underset{\sim}{U_n} = A_n\underset{\sim}{X_n}$, $C_n\underset{\sim}{U_n} = \underset{\sim}{X_n}$, and $\underset{\sim}{\hat{X}_n} = \underset{\sim}{X_n} - \underset{\sim}{U_n}$, so we can calculate $\underset{\sim}{\hat{X}_n}$ by

$$\begin{align}\underset{\sim}{\hat{X}_n} &= C_n\underset{\sim}{U_n} - \underset{\sim}{U_n} \\ &= (C_n - I_n)\underset{\sim}{U_n} \\ &= (C_n - I_n)(\underset{\sim}{X_n} - \underset{\sim}{\hat{X}_n}) \\ &= H_n(\underset{\sim}{X_n} - \underset{\sim}{\hat{X}_n}) \end{align}$$

where $$H_n = C_n - I_n = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & \cdots \\ \theta_{11} & 0 & 0 & 0 & 0 & \cdots \\ \theta_{22} & \theta_{21} & 0 & 0 & 0 & \cdots \\ \theta_{33} & \theta_{32} & \theta_{31} & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$

Then, reading this off row by row from the beginning:

$\hat{X}_1 = (0, \cdots, 0)(\underset{\sim}{X_n} - \underset{\sim}{\hat{X}_n}) = 0$

$\hat{X}_2 = \theta_{11}(X_1 - \hat{X}_1) = \theta_{11}X_1$

$\hat{X}_3 = \theta_{22}(X_1 - \hat{X}_1) + \theta_{21}(X_2 - \hat{X}_2)$

$$\hat{X}_{n+1} = \begin{cases} 0, & n=0 \\ \sum^n_{j=1}\theta_{nj}(X_{n+1-j} - \hat{X}_{n+1-j}), & n \geq 1 \end{cases}$$
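
In code, this recursion is immediate once the coefficients $\theta_{nj}$ are available. A minimal sketch; the function name and the indexing convention `theta[n][j-1]` $= \theta_{nj}$ are my own choices, and the coefficient recursion itself is given in the Coefficients subsection below:

```python
import numpy as np

def one_step_predictors(x, theta):
    """Compute hat{X}_1, ..., hat{X}_N from observations x = (X_1, ..., X_N),
    given coefficients theta with theta[n][j - 1] = theta_{nj} (assumed
    precomputed via the recursion in the next subsection)."""
    N = len(x)
    xhat = np.zeros(N)                 # xhat[0] = hat{X}_1 = 0
    for n in range(1, N):
        # hat{X}_{n+1} = sum_{j=1}^{n} theta_{nj} (X_{n+1-j} - hat{X}_{n+1-j})
        xhat[n] = sum(theta[n][j - 1] * (x[n - j] - xhat[n - j])
                      for j in range(1, n + 1))
    return xhat
```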

Coefficients

$V_0 = K(1,1)$

$\theta_{n,n-k} = V_k^{-1}\Big(K(n+1, k+1) - \sum_{j=0}^{k-1}\theta_{k, k-j}\theta_{n, n-j}V_j\Big)$ for $0\leq k \leq n-1$

$V_n = K(n+1, n+1) - \sum_{j=0}^{n-1}\theta_{n, n-j}^2V_j$, where $K(i, j) = \mathrm{Cov}(X_i, X_j)$
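
Putting the recursion together, here is a minimal sketch of the Innovations Algorithm; the function name `innovations`, passing $K$ as a callable, and the indexing convention `theta[n][j-1]` $= \theta_{nj}$ are my own choices:

```python
import numpy as np

def innovations(K, N):
    """Innovations Algorithm: given the covariance kernel K(i, j) =
    Cov(X_i, X_j) (1-indexed, matching the notes), compute theta_{nj}
    and the mean squared errors V_n for n = 1, ..., N.

    Returns (theta, V) with theta[n][j - 1] = theta_{nj} and V[n] = V_n.
    """
    theta = {n: np.zeros(n) for n in range(1, N + 1)}
    V = np.zeros(N + 1)
    V[0] = K(1, 1)                               # V_0 = K(1,1)
    for n in range(1, N + 1):
        for k in range(n):                       # 0 <= k <= n-1
            # theta_{n,n-k} = V_k^{-1} (K(n+1, k+1) - sum_{j=0}^{k-1} ...)
            s = sum(theta[k][k - 1 - j] * theta[n][n - 1 - j] * V[j]
                    for j in range(k))
            theta[n][n - k - 1] = (K(n + 1, k + 1) - s) / V[k]
        # V_n = K(n+1, n+1) - sum_{j=0}^{n-1} theta_{n,n-j}^2 V_j
        V[n] = K(n + 1, n + 1) - sum(theta[n][n - 1 - j] ** 2 * V[j]
                                     for j in range(n))
    return theta, V

# Example: MA(1) with coefficient 0.5 and unit noise variance, so
# K(i, j) = 1.25 if i == j, 0.5 if |i - j| == 1, and 0 otherwise.
K = lambda i, j: {0: 1.25, 1: 0.5}.get(abs(i - j), 0.0)
theta, V = innovations(K, 10)
print(theta[10][0], V[10])
```

For this MA(1) example, $\theta_{n1}$ approaches $0.5$ and $V_n$ approaches the noise variance $1$ as $n$ grows, so the algorithm recovers the MA coefficient; this is a standard sanity check.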